Resource Constrained Pathfinding with A* and Negative Weights
Ahmadi, Saman, Raith, Andrea, Jalili, Mahdi
Constrained pathfinding is a well-studied yet challenging network optimisation problem that arises in a broad range of real-world applications. Pathfinding with multiple resource limits, known as the Resource Constrained Shortest Path Problem (RCSP), aims to plan a cost-optimal path subject to limited usage of resources. Given the recent advances in constrained and multi-criteria search with A*, this paper introduces a new resource constrained search framework on the basis of A* to tackle RCSP in large networks, even in the presence of negative costs and negative resources. We empirically evaluate our new algorithm on a set of large instances and show up to two orders of magnitude faster performance compared to state-of-the-art RCSP algorithms in the literature.
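The core idea of resource constrained search, independent of the paper's specific framework, can be sketched as an A* variant over (node, cost, resource) labels with dominance pruning. The graph encoding, heuristic, and pruning rule below are illustrative assumptions rather than the authors' algorithm (which also handles negative weights; this sketch assumes non-negative costs and resources):

```python
import heapq

def rcsp_astar(graph, source, target, limit, h):
    """Resource-constrained A*: minimise path cost subject to resource <= limit.

    graph: {u: [(v, cost, resource), ...]} with non-negative weights.
    h: admissible lower bound on the remaining cost to target.
    Returns (cost, resource) of a cheapest feasible path, or None.
    """
    # Priority queue of (f = g + h, cost g, resource r, node).
    open_list = [(h(source), 0, 0, source)]
    best = {}  # node -> list of expanded (cost, resource) labels
    while open_list:
        f, g, r, u = heapq.heappop(open_list)
        if u == target:
            return g, r
        # Dominance pruning: skip if an expanded label is at least as good
        # in both cost and resource usage.
        if any(c <= g and q <= r for c, q in best.get(u, [])):
            continue
        best.setdefault(u, []).append((g, r))
        for v, c, q in graph[u]:
            if r + q <= limit:  # prune resource-infeasible extensions
                heapq.heappush(open_list, (g + c + h(v), g + c, r + q, v))
    return None
```

With a resource limit of 3 on a small graph, the search correctly rejects the cheaper but resource-infeasible route and returns the feasible one.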
Sequential Learning of the Pareto Front for Multi-objective Bandits
Crépon, Elise, Garivier, Aurélien, Koolen, Wouter M
We study the problem of sequential learning of the Pareto front in multi-objective multi-armed bandits. An agent is faced with K possible arms to pull. At each turn she picks one, and receives a vector-valued reward. When she thinks she has enough information to identify the Pareto front of the different arm means, she stops the game and gives an answer. We are interested in designing algorithms such that the answer given is correct with probability at least 1-$\delta$. Our main contribution is an efficient implementation of an algorithm achieving the optimal sample complexity when the risk $\delta$ is small. With K arms in d dimensions p of which are in the Pareto set, the algorithm runs in time O(Kp^d) per round.
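The answer the agent must return is the Pareto front of the arm means. As a minimal illustration of that target object (not the paper's sequential sampling algorithm), the non-dominated set of a fixed collection of mean vectors can be computed directly:

```python
def pareto_front(means):
    """Return indices of arms whose mean vectors are not Pareto-dominated.

    Arm a dominates arm b if it is >= in every objective and > in at least one.
    means: list of equal-length tuples of per-objective means.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    return [i for i, m in enumerate(means)
            if not any(dominates(other, m)
                       for j, other in enumerate(means) if j != i)]
```

In the bandit setting the true means are unknown, so the whole difficulty lies in sampling until this set can be identified with probability at least 1-δ.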
An Algorithmic Theory of Simplicity in Mechanism Design
Ferraioli, Diodato, Ventre, Carmine
A growing body of work in economics and computation focuses on the trade-off between implementability and simplicity in mechanism design. The goal is to develop a theory that not only allows one to design an incentive structure that is easy to grasp for imperfectly rational agents, but also to understand the ensuing limitations on the class of mechanisms that enforce it. In this context, the concept of OSP mechanisms has assumed a prominent role, since they provably account for the absence of contingent reasoning skills, a specific cognitive limitation. For single-dimensional agents, it is known that OSP mechanisms need to use certain greedy algorithms. In this work, we introduce a notion that interpolates between OSP and SOSP, a more stringent notion where agents only plan a subset of their own future moves. We provide an algorithmic characterization of this novel class of mechanisms for single-dimensional domains and binary allocation problems that precisely measures the interplay between simplicity and implementability. We build on this to show how mechanisms based on reverse greedy algorithms (a.k.a. deferred acceptance auctions) are algorithmically more robust to imperfect rationality than those adopting greedy algorithms.
An Efficient Incremental Simple Temporal Network Data Structure for Temporal Planning
One popular technique to solve temporal planning problems consists in decoupling the causal decisions, delegating them to heuristic search, from the temporal decisions, delegating them to a simple temporal network (STN) solver. In this architecture, one needs to check the consistency of a series of STNs that are related to one another; therefore, having methods that incrementally re-use previous computations and avoid expensive memory duplication is of paramount importance. In this paper, we describe in detail how STNs are used in temporal planning, we identify a clear interface to support this use-case, and we present a data structure implementing this interface that is both time- and memory-efficient. We show that our data structure, called \deltastn, is superior to other state-of-the-art approaches on sequences of problems from temporal planning.
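The consistency question the STN solver must answer has a classical batch formulation: an STN is consistent iff its distance graph has no negative cycle. The Bellman-Ford-style check below illustrates that test only; it is not the incremental \deltastn structure the paper contributes, which avoids re-running it from scratch:

```python
def stn_consistent(n, constraints):
    """Check STN consistency over events 0..n-1.

    constraints: list of (i, j, w) meaning t_j - t_i <= w.
    The STN is consistent iff the distance graph has no negative cycle,
    detected here by Bellman-Ford from an implicit virtual source.
    """
    dist = [0.0] * n  # virtual source with 0-weight edges to every event
    for _ in range(n):
        changed = False
        for i, j, w in constraints:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
                changed = True
        if not changed:
            return True  # converged: no negative cycle
    return False  # still relaxing after n passes -> negative cycle
```

A planner interleaving search and scheduling would call a test like this after every causal decision, which is precisely why incremental data structures pay off.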
Dynamic Controllability of Temporal Plans in Uncertain and Partially Observable Environments
Bit-Monnot, Arthur (LAAS-CNRS) | Morris, Paul (NASA Ames Research Center)
The formalism of Simple Temporal Networks (STNs) provides methods for evaluating the feasibility of temporal plans. The basic formalism deals with the consistency of quantitative temporal requirements on scheduled events. This implicitly assumes a single agent has full control over the timing of events. The extension of Simple Temporal Networks with Uncertainty (STNUs) introduces uncertainty into the timing of some events. The two main notions of feasibility for STNUs are (1) Strong Controllability, where a single schedule works irrespective of the duration outcomes, and (2) Dynamic Controllability, where a strategy exists to schedule future events based on the outcomes of past events. Case (1) essentially assumes the timing of uncertain events cannot be observed by the agent, while case (2) assumes full observability. The formalism of Partially Observable Simple Temporal Networks with Uncertainty (POSTNU) provides an intermediate stance between these two extremes, where a known subset of the uncertain events can be observed when they occur. A sound and complete polynomial algorithm for determining the Dynamic Controllability of POSTNUs has not previously been known; we present one in this paper. This answers an open problem that has been posed in the literature. The approach we take factors the problem into Strong Controllability micro-problems within an overall Dynamic Controllability macro-problem framework. It generalizes the notion of labeled distance graph from STNUs. The generalized labels are expressed as max/min expressions involving the observables. The paper introduces sound generalized reduction rules that act on the generalized labels. These incorporate tightenings based on observability that preserve dynamically viable strategies. It is shown that if the generalized reduction rules reach quiescence without exposing an inconsistency, then the POSTNU is Dynamically Controllable (DC).
The paper also presents algorithms that apply the reduction rules in an organized way and reach quiescence in a polynomial number of steps if the POSTNU is Dynamically Controllable. Remarkably, the generalized perspective leads to a simpler and more uniform framework that applies also to the STNU special case. It helps illuminate the previous methods inasmuch as the max/min label representation is more semantically clear than the ad-hoc upper/lower case labels previously used.
On the Connection between Greedy Algorithms and Imperfect Rationality
Ferraioli, Diodato, Ventre, Carmine
The design of algorithms or protocols that are able to align the goals of the planner with the selfish interests of the agents involved in these protocols is of paramount importance in almost every decentralized setting (such as computer networks and markets), as shown by the rich literature in Mechanism Design. Recently, considerable interest has been devoted to the design of mechanisms for imperfectly rational agents, i.e., mechanisms for which agents can easily see that no action other than following the protocol would satisfy their interests better. This work has culminated in the definition of Obviously Strategyproof (OSP) Mechanisms, which have been shown to capture the incentives of agents without contingent reasoning skills. Without an understanding of the algorithmic nature of OSP mechanisms, it is hard to assess how well these mechanisms can satisfy the goals of the planner. For the case of binary allocation problems and agents whose private type is a single number, recent work has shown that a generalization of greedy completely characterizes OSP. In this work, we strengthen the connection between greedy and OSP by providing a characterization of OSP mechanisms for all optimization problems involving these single-parameter agents. Specifically, we prove that OSP mechanisms must essentially work as follows: they either greedily look for agents with ``better'' types and allocate them larger outcomes; or reverse greedily look for agents with ``worse'' types and allocate them smaller outcomes; or, finally, split the domain of agents into ``good'' and ``bad'' types, and subsequently proceed in a reverse greedy fashion for the former and greedily for the latter. We further demonstrate how to use this characterization to give bounds on the approximation guarantee of OSP mechanisms for the well-known scheduling related machines problem.
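The greedy versus reverse greedy dichotomy in the characterization can be illustrated on a toy binary allocation problem with single-parameter agents. The selection rules below are hypothetical simplifications; note that both can select the same final set, since the OSP distinction lies in how agents are queried over time, not only in which allocation comes out:

```python
def greedy_allocation(types, k):
    """Greedy: repeatedly pick the agent with the best (largest) remaining
    type and allocate to it, until k agents are served."""
    remaining = dict(enumerate(types))
    served = []
    while remaining and len(served) < k:
        i = max(remaining, key=remaining.get)
        served.append(i)
        del remaining[i]
    return sorted(served)

def reverse_greedy_allocation(types, k):
    """Reverse greedy (deferred acceptance style): repeatedly reject the
    agent with the worst remaining type until only k agents remain."""
    remaining = dict(enumerate(types))
    while len(remaining) > k:
        worst = min(remaining, key=remaining.get)
        del remaining[worst]
    return sorted(remaining)
```

On this simple objective the two rules coincide; the robustness gap the abstract refers to shows up in the extensive-form interaction with the agents, not in this outcome.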
Finally, a Fast Algorithm for Shortest Paths on Negative Graphs
In algorithms, as in life, negativity can be a drag. Consider the problem of finding the shortest path between two points on a graph -- a network of nodes connected by links, or edges. Often, these edges aren't interchangeable: A graph could represent a road map on which some roads are slower than others or have higher tolls. Computer scientists account for these differences by pairing each edge with a "weight" that quantifies the cost of moving across that segment -- whether that cost represents time, money or something else. Since the 1970s, they've known how to find shortest paths essentially as fast as theoretically possible, assuming all weights are positive numbers. But on some graphs weights can be negative -- traveling along one segment can offset the cost of traversing another.
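The classical baseline for this harder setting is the Bellman-Ford algorithm, which tolerates negative weights at the price of an O(nm) running time, far from the near-linear bounds the article goes on to discuss. A minimal sketch:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths allowing negative edge weights.

    edges: list of (u, v, w) directed edges.
    Returns the distance list, or None if a negative cycle is reachable
    from the source (shortest paths are then undefined).
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # n-1 relaxation passes suffice when no negative cycle exists.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement exposes a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist
```

With non-negative weights, Dijkstra's algorithm with a heap does the same job in near-linear time, which is exactly the gap the recent work on negative graphs set out to close.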
Fast Optimal Clearing of Capped-Chain Barter Exchanges
Plaut, Benjamin (Carnegie Mellon University) | Dickerson, John P. (Carnegie Mellon University) | Sandholm, Tuomas (Carnegie Mellon University)
Kidney exchange is a type of barter market where patients exchange willing but incompatible donors. These exchanges are conducted via cycles---where each incompatible patient-donor pair in the cycle both gives and receives a kidney---and chains, which are started by an altruist donor who does not need a kidney in return. Finding the best combination of cycles and chains is hard. The leading algorithms for this optimization problem use either branch and price — a combination of branch and bound and column generation — or constraint generation. We show a correctness error in the leading prior branch-and-price-based approach [Glorie et al. 2014]. We develop a provably correct fix to it, which also necessarily changes the algorithm's complexity, as well as other improvements to the search algorithm. Next, we compare our solver to the leading constraint-generation-based solver and to the best prior correct branch-and-price-based solver. We focus on the setting where chains have a length cap. A cap is desirable in practice since if even one edge in the chain fails, the rest of the chain fails: the cap precludes very long chains that are extremely unlikely to execute and instead causes the solution to have more parallel chains and cycles that are more likely to succeed. We work with the UNOS nationwide kidney exchange, which uses a chain cap. Algorithms from our group autonomously make the transplant plans for that exchange. On that real data and demographically-accurate generated data, our new solver scales significantly better than the prior leading approaches.
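The optimization being solved can be illustrated at toy scale as enumerating short cycles in the pair-compatibility digraph and selecting a vertex-disjoint subset covering the most pairs. The brute-force sketch below is for intuition only: it omits chains and edge weights and is exponential, which is exactly why production solvers use branch and price or constraint generation:

```python
from itertools import combinations

def find_cycles(graph, max_len):
    """Enumerate simple directed cycles of length <= max_len.

    graph: {u: set of v}, an edge u -> v meaning pair u's donor is
    compatible with pair v's patient. Cycles are emitted once, in the
    canonical form that starts at their smallest vertex.
    """
    cycles = []
    def extend(path):
        u = path[-1]
        for v in graph[u]:
            if v == path[0] and len(path) >= 2:
                cycles.append(tuple(path))
            elif v not in path and v > path[0] and len(path) < max_len:
                extend(path + [v])
    for s in sorted(graph):
        extend([s])
    return cycles

def best_clearing(graph, max_len):
    """Brute force: vertex-disjoint set of cycles covering the most pairs."""
    cycles = find_cycles(graph, max_len)
    best, best_cover = [], 0
    for r in range(len(cycles) + 1):
        for combo in combinations(cycles, r):
            used = [v for c in combo for v in c]
            if len(used) == len(set(used)) and len(used) > best_cover:
                best, best_cover = list(combo), len(used)
    return best
```

Even this tiny model shows the combinatorial tension: a short cycle and a longer cycle sharing a pair cannot both be selected, so local choices constrain the global optimum.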
ITSAT: An Efficient SAT-Based Temporal Planner
Rankooh, Masood Feyzbakhsh, Ghassem-Sani, Gholamreza
Planning as satisfiability is known to be an efficient approach for many types of planning problems. However, this approach has not been competitive with state-space-based methods in temporal planning. This paper describes ITSAT, an efficient SAT-based (satisfiability-based) temporal planner capable of temporally expressive planning. The novelty of ITSAT lies in the way it handles the temporal constraints of given problems without getting involved in the difficulties of introducing continuous variables into the corresponding satisfiability problems. We also show how, as in SAT-based classical planning, carefully devised preprocessing and encoding schemata can considerably improve the efficiency of SAT-based temporal planning. We present two preprocessing methods, for mutex relation extraction and action compression. We also show that the separation of causal and temporal reasoning enables us to employ compact encodings based on the concept of parallel execution semantics. Although such encodings have been shown to be quite effective in classical planning, ITSAT is the first temporal planner to utilize this type of encoding. Our empirical results show that not only does ITSAT outperform the state-of-the-art temporally expressive planners, it is also competitive with fast temporal planners that cannot handle required concurrency.
Probabilistic Planning with Risk-Sensitive Criterion
Hou, Ping (New Mexico State University)
While probabilistic planning models have been extensively used by the AI and Decision Theoretic communities for planning under uncertainty, the objective of minimizing the expected cumulative cost is inappropriate for high-stake planning problems. With this motivation in mind, we revisit the Risk-Sensitive criterion (RS-criterion), where the objective is to find a policy that maximizes the probability that the cumulative cost is within some user-defined cost threshold. The overall scope of this research is to develop efficient and scalable algorithms to optimize the RS-criterion in probabilistic planning problems. In our recent paper (Hou, Yeoh, and Varakantham 2014), we formally defined Risk-Sensitive MDPs (RS-MDPs) and introduced new algorithms for RS-MDPs with non-negative costs. Next, my plan is to develop algorithms for RS-MDPs with negative-cost cycles and for Risk-Sensitive POMDPs (RS-POMDPs).
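On small finite-horizon MDPs, the RS-criterion can be optimized by dynamic programming over the state augmented with the remaining cost budget. The sketch below is a standard illustration under assumed non-negative integer costs, not the author's algorithms:

```python
from functools import lru_cache

def rs_value(transitions, s0, budget, horizon):
    """Maximise P(total cost <= budget) over `horizon` steps.

    transitions: {state: {action: [(prob, cost, next_state), ...]}}
    Dynamic programming on the augmented state (state, remaining
    budget, steps left); costs are assumed non-negative integers.
    """
    @lru_cache(maxsize=None)
    def v(s, b, t):
        if t == 0:
            return 1.0 if b >= 0 else 0.0
        if b < 0:
            return 0.0  # budget already exceeded: failure is certain
        return max(
            sum(p * v(ns, b - c, t - 1) for p, c, ns in outcomes)
            for outcomes in transitions[s].values()
        )
    return v(s0, budget, horizon)
```

The example below shows why the criterion differs from expected cost: with a tight budget, a deterministic "safe" action and a high-variance "risky" action of equal expected cost yield different success probabilities.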